'We May Have a Crisis on Our Hands': The Unregulated Rise of Emotionally Intelligent AI
Pillay is an editorial fellow at TIME. At least once a month, two-thirds of people who regularly use AI turn to their bots for advice on sensitive personal issues and emotional support. Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders, and the companies building AI. That's according to data from 70 countries, gathered by the Collective Intelligence Project (CIP).
Supplementary Materials for Incomplete Multimodality-Diffused Emotion Recognition
In this supplementary material, we first present the details of the conditional score network in Sec. 2, and we conduct experiments on the Chinese MER dataset CH-SIMS in Sec. 4.

Table 1: Hyperparameter settings in IMDer.

| Hyperparameter | CMU-MOSI | CMU-MOSEI |
| --- | --- | --- |
| Optimizer | Adam | Adam |
| Batch size | 32 | 128 |
| Learning rate | 0.001 | 0.002 |
| σ used in our stochastic differential equation | 25 | 25 |
| Number of iterations for Euler-Maruyama solver | 500 | 500 |

CH-SIMS contains 2281 refined video segments with fine-grained annotations of modalities. For the vision modality, we use MultiComp OpenFace2.0. The experimental results are listed in Tab. 3: our proposed IMDer consistently achieves better results than MMIN or GCNet under the random missing protocol.
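The Euler-Maruyama settings above (σ = 25, 500 iterations) can be sketched as a reverse-time sampling loop for a variance-exploding SDE. This is an illustrative reconstruction, not IMDer's released code: the `score_net(x, t, cond)` interface, the function name, and the noise schedule σ(t) = σ^t are all assumptions.

```python
import numpy as np

def euler_maruyama_recover(score_net, cond, shape, sigma=25.0, n_steps=500, seed=0):
    """Sample a missing-modality feature tensor by integrating the reverse-time
    VE SDE with an Euler-Maruyama solver, conditioned on available modalities.
    Assumes the schedule sigma(t) = sigma**t, so g(t)^2 = d(sigma(t)^2)/dt."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape) * sigma   # start from the prior N(0, sigma^2 I)
    ts = np.linspace(1.0, 1e-3, n_steps)     # integrate t from 1 down toward 0
    dt = ts[0] - ts[1]                       # positive step size
    for t in ts:
        g2 = sigma ** (2 * t) * 2.0 * np.log(sigma)   # diffusion coefficient g(t)^2
        drift = g2 * score_net(x, t, cond)            # reverse drift uses the score
        x = x + drift * dt + np.sqrt(g2 * dt) * rng.standard_normal(shape)
    return x
```

In IMDer a trained conditional score network would play the role of `score_net`; plugging in the analytic Gaussian score `-x / sigma**2` makes the loop contract the prior sample toward the origin, which is a quick sanity check of the solver.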
To Err Like Human: Affective Bias-Inspired Measures for Visual Emotion Recognition Evaluation
Accuracy is a commonly adopted performance metric in classification tasks, measuring the proportion of correctly classified samples among all samples. It assumes equal importance for all classes, and hence equal severity for all misclassifications. However, in emotion classification, due to the psychological similarities between emotions, misclassifying a certain emotion into one class may be more severe than into another; e.g., misclassifying 'excitement' as 'anger' is apparently more severe than misclassifying it as 'awe'. Albeit highly meaningful for many applications, metrics capable of measuring these kinds of misclassification in visual emotion recognition tasks have yet to be explored. In this paper, based on Mikel's emotion wheel from psychology, we propose a novel approach for evaluating performance in visual emotion recognition, which takes into account the distance on the emotion wheel between different emotions to mimic the psychological nuances of emotions. Experimental results on semi-supervised emotion recognition and a user study show that our proposed metrics are more effective than accuracy at assessing performance and conform to the cognitive laws of human emotions.
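The wheel-distance idea can be sketched in a few lines. Note that the circular ordering of Mikel's eight emotions below is an illustrative assumption, as is the `wheel_score` credit function; the paper's actual metric may be defined differently.

```python
# Assumed circular arrangement of Mikel's eight emotions (illustrative only).
WHEEL = ["amusement", "contentment", "awe", "excitement",
         "fear", "disgust", "anger", "sadness"]
IDX = {e: i for i, e in enumerate(WHEEL)}

def wheel_distance(a, b):
    """Shortest circular distance between two emotions on the wheel."""
    d = abs(IDX[a] - IDX[b])
    return min(d, len(WHEEL) - d)

def wheel_score(y_true, y_pred):
    """Mean per-sample credit in [0, 1]: a prediction loses credit in
    proportion to its wheel distance from the ground-truth emotion."""
    max_d = len(WHEEL) // 2
    total = sum(1 - wheel_distance(t, p) / max_d
                for t, p in zip(y_true, y_pred))
    return total / len(y_true)

# Mirrors the paper's example: confusing 'excitement' with 'anger'
# is penalized more heavily than confusing it with nearby 'awe'.
print(wheel_distance("excitement", "anger"))  # farther on the wheel
print(wheel_distance("excitement", "awe"))    # a near neighbour
```

Unlike plain accuracy, this score still distinguishes a near-miss from a psychologically implausible confusion.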
$E^3$: Exploring Embodied Emotion Through A Large-Scale Egocentric Video Dataset
Understanding human emotions is fundamental to enhancing human-computer interaction, especially for embodied agents that mimic human behavior. Traditional emotion analysis often takes a third-person perspective, limiting the ability of agents to interact naturally and empathetically. To address this gap, this paper presents $E^3$ (Exploring Embodied Emotion), the first massive first-person-view video dataset.
Incomplete Multimodality-Diffused Emotion Recognition
Human multimodal emotion recognition (MER) aims to perceive and understand human emotions via various heterogeneous modalities, such as language, vision, and acoustics. Compared with a single modality, the complementary information across modalities facilitates robust emotion understanding. Nevertheless, in real-world scenarios, missing modalities hinder multimodal understanding and degrade MER performance. In this paper, we propose an Incomplete Multimodality-Diffused emotion recognition (IMDer) method to mitigate the challenge of MER under incomplete modalities. To recover the missing modalities, IMDer exploits a score-based diffusion model that maps input Gaussian noise into the desired distribution space of the missing modalities and recovers missing data in accordance with their original distributions. Specifically, to reduce semantic ambiguity between the missing and the recovered modalities, the available modalities are embedded as a condition to guide and refine the diffusion-based recovery process. In contrast to previous work, the diffusion-based modality recovery mechanism in IMDer achieves both distribution consistency and semantic disambiguation simultaneously. Feature visualizations of the recovered modalities illustrate consistent modality-specific distributions and semantic alignment. Moreover, quantitative experimental results verify that IMDer obtains state-of-the-art MER accuracy under various missing-modality patterns.
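The conditional recovery described above can be caricatured as follows. The modality names, the flat concatenation used as the condition embedding, and the `recover_fn` stand-in for the diffusion sampler are all illustrative assumptions, not IMDer's actual API.

```python
import numpy as np

def recover_missing(modalities, recover_fn):
    """Fill in missing modalities (None entries) using the available ones
    as the guiding condition, in the spirit of IMDer's conditional recovery.
    `recover_fn(name, cond)` stands in for the diffusion-based sampler."""
    available = {k: v for k, v in modalities.items() if v is not None}
    # Sketch of the condition embedding: concatenate available features.
    cond = np.concatenate([v.ravel() for v in available.values()])
    recovered = dict(available)
    for name, feat in modalities.items():
        if feat is None:
            recovered[name] = recover_fn(name, cond)  # condition-guided recovery
    return recovered

# Example: vision is missing; language and acoustic features guide its recovery.
inputs = {"language": np.ones((4,)), "vision": None, "acoustic": np.zeros((4,))}
out = recover_missing(inputs, lambda name, cond: np.full((4,), cond.mean()))
```

The point of the sketch is the data flow: every missing modality is regenerated from the same condition built out of whatever modalities survived, which is what lets IMDer target both distribution consistency and semantic alignment.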